proximity sensor
Few-shot transfer of tool-use skills using human demonstrations with proximity and tactile sensing
Aoyama, Marina Y., Vijayakumar, Sethu, Narita, Tetsuya
Tools extend the manipulation abilities of robots, much as they do for humans. Despite human expertise in tool manipulation, teaching robots these skills faces challenges. The complexity arises from the interplay of two simultaneous points of contact: one between the robot and the tool, and another between the tool and the environment. Tactile and proximity sensors play a crucial role in identifying these complex contacts. However, learning tool manipulation with these sensors remains challenging due to limited real-world data and the large sim-to-real gap. To address this, we propose a few-shot tool-use skill transfer framework using multimodal sensing. The framework pre-trains a base policy in simulation to capture contact states common to tool-use skills, then fine-tunes it with human demonstrations collected in the real-world target domain to bridge the domain gap. We validate that this framework enables teaching surface-following tasks with tools of diverse physical and geometric properties from a small number of demonstrations on the Franka Emika robot arm. Our analysis suggests that the robot acquires new tool-use skills by transferring the ability to recognise tool-environment contact relationships from the pre-trained to the fine-tuned policy. Additionally, combining proximity and tactile sensors enhances the identification of contact states and environmental geometry.
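The pre-train-then-fine-tune idea can be sketched in miniature: learn a base policy from plentiful simulated data, then fit the few real demonstrations while regularizing toward the base weights, so the demonstrations only have to correct the domain gap. Everything here (linear policy, synthetic data, the regularization weight) is an illustrative assumption, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-training data: simulated contact features -> actions.
X_sim = rng.normal(size=(500, 4))
w_true_sim = np.array([1.0, -0.5, 0.3, 0.0])
y_sim = X_sim @ w_true_sim + 0.01 * rng.normal(size=500)

# Pre-train the base policy (ordinary least squares).
w_base, *_ = np.linalg.lstsq(X_sim, y_sim, rcond=None)

# A handful of real-world demonstrations from a shifted target domain.
X_real = rng.normal(size=(10, 4))
w_true_real = w_true_sim + np.array([0.0, 0.0, 0.0, 0.4])  # domain gap
y_real = X_real @ w_true_real

# Fine-tune: fit the demos while regularizing toward the base policy,
# so the few samples correct the gap instead of relearning the skill.
lam = 1.0
A = X_real.T @ X_real + lam * np.eye(4)
b = X_real.T @ y_real + lam * w_base
w_finetuned = np.linalg.solve(A, b)
```

With only ten target-domain samples, the regularizer keeps the fine-tuned weights anchored to what pre-training already captured.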
Evolution of Fear and Social Rewards in Prey-Predator Relationship
Fear is a critical brain function for detecting danger and learning to avoid specific stimuli that can lead to danger. While fear is believed to have evolved under pressure from predators, reproducing this evolution experimentally is challenging. To investigate the relationship between environmental conditions, the evolution of fear, and the evolution of other rewards, such as food reward and social reward, we developed a distributed evolutionary simulation. In our simulation, prey and predator agents co-evolve their innate reward functions, including a possibly fear-like term for observing predators, and learn behaviors via reinforcement learning. Surprisingly, our simulation revealed that social reward for observing the same species is more important for prey survival, and that a fear-like negative reward for observing predators evolves only after social reward is acquired. We also found that predators with increased hunting ability (a larger mouth) amplified the emergence of fear, but that fear evolution is more stable with non-evolving predators that are bad at chasing prey. Additionally, unlike for predators, we found that positive rewards evolve in opposition to fear for stationary threats, as areas with abundant leftover food develop around them. These findings suggest that fear and social reward have had a complex interplay with each other through evolution, shaped by the nature of predators and threats.
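The co-evolution loop described above can be caricatured in a few lines: a genome holds the innate reward weights, a toy survival proxy stands in for the full learn-then-evaluate RL episode, and truncation selection with Gaussian mutation drives evolution. The fitness function and every constant below are invented for illustration and are not the paper's simulator.

```python
import random

random.seed(0)

# Each genome holds innate reward weights: (food, social, predator).
# A negative predator weight would correspond to a fear-like reward.
def fitness(genome):
    food_w, social_w, predator_w = genome
    # Toy survival proxy (an assumption, not the paper's environment):
    # grouping helps survival, approaching predators hurts it.
    return 1.0 * food_w + 0.8 * social_w - 1.2 * max(predator_w, 0.0)

def mutate(genome, sigma=0.1):
    return tuple(w + random.gauss(0.0, sigma) for w in genome)

population = [(0.5, 0.0, 0.0)] * 20
for generation in range(200):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:5]                       # truncation selection
    population = [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=fitness)
```

In the paper, the inner evaluation is a full reinforcement-learning lifetime in a shared prey-predator world rather than a closed-form score.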
Multimodal Limbless Crawling Soft Robot with a Kirigami Skin
Tirado, Jonathan, Parvaresh, Aida, Seyidoğlu, Burcu, Bedford, Darryl A., Jørgensen, Jonas, Rafsanjani, Ahmad
For limbless locomotion on flat surfaces, the absence of push points over the surface requires the coordination of body deformation and static friction to generate propulsive forces. The rhythmic contraction of earthworms' muscles produces peristaltic waves along their slender bodies [1], while friction-enhancing bristles on their skin, called setae, ensure a firm grip on the ground with each stride [2, 3]. The setae generate a directionally asymmetric friction that is easy to overcome in the direction of movement but strong enough to prevent sliding back. Thus, three fundamental elements of limbless locomotion on terrains with uniform roughness are large deformability, rhythmic contractions, and asymmetric friction. The limbless locomotion of earthworms has inspired the development of several crawling soft robots that replicate some of their morphological features, enabling them to crawl on uniform terrains [4, 5, 6], inside pipes [7, 8, 9], and through granular media [10, 11]. However, unifying all of these capabilities in a single crawling robot remains unexplored. Additionally, many earthworm-inspired soft robots can only move in a straight line and do not possess steering capabilities, which limits their applicability to unstructured real-world terrains. To replicate body deformation, several researchers have developed worm-inspired soft robots powered by various actuation mechanisms.
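How rhythmic length change plus asymmetric friction yields net displacement can be illustrated with a toy two-anchor crawler: during contraction the tail slides forward (friction is low in the movement direction) while the setae anchor the head; during extension the head pushes off the anchored tail. The geometry and step sizes are purely illustrative.

```python
# Toy peristaltic crawler: one body segment alternately contracts and
# extends; asymmetric friction (low forward, high backward) turns the
# rhythmic length change into net forward displacement.
def crawl(cycles):
    head, tail = 1.0, 0.0   # segment endpoints, rest length 1.0
    for _ in range(cycles):
        # Contraction: the tail slides forward while the anchored head
        # (setae resisting backward slip) holds its ground.
        tail = head - 0.5
        # Extension: the head pushes forward off the now-anchored tail.
        head = tail + 1.0
    return head, tail
```

Each contract-extend cycle advances the whole body by the contraction amplitude; with symmetric friction the same cycle would produce zero net motion.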
DeCo: Task Decomposition and Skill Composition for Zero-Shot Generalization in Long-Horizon 3D Manipulation
Chen, Zixuan, Yin, Junhui, Chen, Yangtao, Huo, Jing, Tian, Pinzhuo, Shi, Jieqi, Hou, Yiwen, Li, Yinchuan, Gao, Yang
Generalizing language-conditioned multi-task imitation learning (IL) models to novel long-horizon 3D manipulation tasks remains a significant challenge. To address this, we propose DeCo (Task Decomposition and Skill Composition), a model-agnostic framework compatible with various multi-task IL models, designed to enhance their zero-shot generalization to novel, compositional, long-horizon 3D manipulation tasks. DeCo first decomposes IL demonstrations into a set of modular atomic tasks based on the physical interaction between the gripper and objects, and constructs an atomic training dataset that enables models to learn a diverse set of reusable atomic skills during imitation learning. At inference time, DeCo leverages a vision-language model (VLM) to parse high-level instructions for novel long-horizon tasks, retrieve the relevant atomic skills, and dynamically schedule their execution; a spatially-aware skill-chaining module then ensures smooth, collision-free transitions between sequential skills. We evaluate DeCo in simulation using DeCoBench, a benchmark specifically designed to assess zero-shot generalization of multi-task IL models in compositional long-horizon 3D manipulation. Across three representative multi-task IL models (RVT-2, 3DDA, and ARP), DeCo achieves success rate improvements of 66.67%, 21.53%, and 57.92%, respectively, on 12 novel compositional tasks. Moreover, in real-world experiments, a DeCo-enhanced model trained on only 6 atomic tasks successfully completes 9 novel long-horizon tasks, yielding an average success rate improvement of 53.33% over the base multi-task IL model. Video demonstrations are available at: https://deco226.github.io.
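The decomposition signal DeCo uses, physical interaction between the gripper and objects, can be sketched as segmenting a demonstration wherever the contact state changes. The trajectory encoding and field names below are illustrative, not the framework's actual data format.

```python
# Toy trajectory: each step records which object (if any) the gripper
# touches. Atomic-task boundaries occur where the contact state changes.
def decompose(trajectory):
    segments, current = [], [trajectory[0]]
    for step in trajectory[1:]:
        if step["contact"] != current[-1]["contact"]:
            segments.append(current)
            current = []
        current.append(step)
    segments.append(current)
    return segments

traj = [
    {"t": 0, "contact": None},    # approach
    {"t": 1, "contact": None},
    {"t": 2, "contact": "mug"},   # grasp begins
    {"t": 3, "contact": "mug"},   # transport
    {"t": 4, "contact": None},    # release
]
atomic = decompose(traj)
```

At inference time the framework runs this logic in reverse: a VLM maps a long-horizon instruction to a sequence of such atomic skills, and a skill-chaining module smooths the transitions between them.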
TetraGrip: Sensor-Driven Multi-Suction Reactive Object Manipulation in Cluttered Scenes
Torrado, Paolo, Levin, Joshua, Grotz, Markus, Smith, Joshua
Warehouse robotic systems equipped with vacuum grippers must reliably grasp a diverse range of objects from densely packed shelves. However, these environments present significant challenges, including occlusions, diverse object orientations, stacked and obstructed items, and surfaces that are difficult to suction. We introduce TetraGrip, a novel vacuum-based grasping strategy featuring four suction cups mounted on linear actuators. Each actuator is equipped with an optical time-of-flight (ToF) proximity sensor, enabling reactive grasping. We evaluate TetraGrip in a warehouse-style setting, demonstrating its ability to manipulate objects in stacked and obstructed configurations. Our results show that our RL-based policy improves picking success in stacked-object scenarios by 22.86% compared to a single-suction gripper. Additionally, we demonstrate that TetraGrip can successfully grasp objects in scenarios where a single-suction gripper fails due to physical limitations, specifically in two cases: (1) picking an object occluded by another object and (2) retrieving an object in a complex scenario. These findings highlight the advantages of multi-actuated, suction-based grasping in unstructured warehouse environments. The project website is available at: https://tetragrip.github.io/.
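A minimal sketch of the per-cup reactive behaviour: each actuator extends until its ToF sensor reports the surface within a contact threshold, and a cup whose target lies beyond the stroke limit is simply left off. The thresholds, stroke length, and planning function are assumptions for illustration, not TetraGrip's actual control law.

```python
# Reactive extension sketch: one ToF reading per suction cup.
CONTACT_MM = 5.0      # distance at which the cup is considered seated
MAX_STROKE_MM = 50.0  # actuator travel limit

def plan_extensions(tof_readings_mm):
    """Return per-cup actuator extension and whether suction is enabled."""
    plan = []
    for dist in tof_readings_mm:
        reachable = dist - CONTACT_MM <= MAX_STROKE_MM
        extension = min(max(dist - CONTACT_MM, 0.0), MAX_STROKE_MM)
        plan.append({"extend_mm": extension, "suction": reachable})
    return plan

# Cups 1-3 can seat on the object; cup 4 sees a surface beyond its
# stroke (e.g. an occluded gap) and keeps its suction disabled.
plan = plan_extensions([12.0, 20.0, 8.0, 90.0])
```

Deciding per cup, rather than for the gripper as a whole, is what lets a multi-actuated design conform to stacked and obstructed items that defeat a single rigid suction cup.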
iTrash: Incentivized Token Rewards for Automated Sorting and Handling
Ortega, Pablo, Ferrer, Eduardo Castelló
As robotic systems (RS) become more autonomous, they are increasingly used in small spaces and offices to automate tasks such as cleaning, infrastructure maintenance, or resource management. In this paper, we propose iTrash, an intelligent trashcan that aims to improve recycling rates in small office spaces. To evaluate it, we ran a five-day experiment and found that iTrash can produce an efficiency increase of more than 30% compared to traditional trashcans. The findings derived from this work point to the fact that using iTrash not only increases recycling rates, but also provides valuable data, such as user behaviour or bin usage patterns, that cannot be obtained from a normal trashcan. This information can be used to predict and optimize tasks in these spaces. Finally, we explored the potential of using blockchain technology to create economic incentives for recycling, following a Save-as-you-Throw (SAYT) model.
Gemini AI is coming to Google TV devices in 2025, making them easier to talk to
This week at CES, Google presented an early look at new software and hardware upgrades coming to Google TV devices. The new features include the integration of Gemini, Google's AI model, to the Google Assistant, as well as a new ambient experience. New smart TVs with Google TV will also gain far-field mics and proximity sensors to support the new software perks. If you've used a Google TV or Google streaming device, you may have already used the "hey Google" prompt to search for shows to watch. With the addition of Gemini, those "conversations" should now feel more natural.
Generating Whole-Body Avoidance Motion through Localized Proximity Sensing
Borelli, Simone, Giovinazzo, Francesco, Grella, Francesco, Cannata, Giorgio
This paper presents a novel control algorithm for robotic manipulators in unstructured environments using proximity sensors partially distributed over the platform. The proposed approach exploits arrays of multi-zone Time-of-Flight (ToF) sensors to generate a sparse point-cloud representation of the robot's surroundings. By employing computational geometry techniques, we fuse knowledge of the robot's geometric model with ToF sensory feedback to generate whole-body motion tasks, allowing both sensorized and non-sensorized links to move in response to unpredictable events such as human motion. In particular, the proposed algorithm computes the pair of closest points between the environment cloud and the robot links, generating a dynamic avoidance motion that is implemented as the highest-priority task in a two-level hierarchical architecture. This design choice allows the robot to work safely alongside humans even without complete sensorization of its surface. Experimental validation demonstrates the algorithm's effectiveness in both static and dynamic scenarios, achieving performance comparable to well-established control techniques that instead move the sensor mounting positions on the robot body. The presented algorithm exploits any arbitrary point on the robot surface to perform avoidance motion, improving the distance margin by up to 100 mm thanks to the rendering of virtual avoidance tasks on non-sensorized links.
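The core geometric step, finding the closest point pair between the environment cloud and the robot links, and turning it into a repulsive task velocity, can be sketched as follows. Links are approximated as line segments and the repulsion law is a simplified stand-in for the paper's highest-priority avoidance task; the gain and safety distance are illustrative.

```python
import numpy as np

def closest_point_on_segment(p, a, b):
    """Closest point to p on the segment from a to b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return a + t * ab

def avoidance_velocity(cloud, links, d_safe=0.3, gain=1.0):
    """Repulsive task velocity from the overall closest cloud/link pair.

    `links` is a list of (start, end) segments approximating the robot.
    The repulsion point may lie on any link, sensorized or not, which is
    what lets non-sensorized links participate in the avoidance task."""
    best = None
    for a, b in links:
        for p in cloud:
            q = closest_point_on_segment(p, a, b)
            d = np.linalg.norm(p - q)
            if best is None or d < best[0]:
                best = (d, p, q)
    d, p, q = best
    if d >= d_safe:
        return np.zeros(3), d
    direction = (q - p) / d            # push the link away from the obstacle
    return gain * (d_safe - d) * direction, d

# One obstacle point 0.1 m from a vertical link.
cloud = np.array([[0.1, 0.0, 0.5]])
links = [(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))]
vel, dist = avoidance_velocity(cloud, links)
```

In the full system this velocity is rendered as the top-priority task of a two-level hierarchy, with the nominal motion projected into its null space.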
A Haptic-Based Proximity Sensing System for Buried Object in Granular Material
Zhang, Zeqing, Jia, Ruixing, Yan, Youcan, Han, Ruihua, Lin, Shijie, Jiang, Qian, Zhang, Liangjun, Pan, Jia
The proximity perception of objects in granular materials is significant, especially for applications such as minesweeping. However, due to particles' opacity and complex properties, existing proximity sensors suffer from high costs stemming from sophisticated hardware and from unintuitive results that burden the user. In this paper, we propose a simple yet effective proximity sensing system for buried objects based on the haptic feedback of the sensor-granule interaction. We study and employ a unique characteristic of granular media, the failure wedge zone, combined with a machine learning method, Gaussian process regression, to identify force-signal changes induced by the proximity of objects and thereby achieve near-field perception. Furthermore, we design a novel trajectory to control the probe's search through the granules for a wide perception range. Our proximity sensing system can also adaptively determine optimal parameters for robust operation in different particles. Experiments demonstrate that our system can perceive buried objects 0.5 to 7 cm in advance across various materials.
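The regression idea can be sketched as follows: learn the probe's force-versus-depth profile in object-free granules, then flag proximity when a measured force deviates from that prediction beyond a margin. A tiny RBF-kernel regressor stands in for the paper's Gaussian process pipeline; the data, length scale, and margin are all illustrative.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """RBF kernel matrix between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gpr_fit(x, y, noise=1e-3):
    """Solve for the kernel weights (GP posterior mean coefficients)."""
    K = rbf(x, x) + noise * np.eye(len(x))
    return np.linalg.solve(K, y)

def gpr_predict(x_train, alpha, x_query):
    return rbf(x_query, x_train) @ alpha

# Baseline: probe force vs. insertion depth in object-free granules
# (synthetic data; the real system learns this from sensor logs).
depth = np.linspace(0.0, 5.0, 30)
force_free = 0.8 * depth  # toy linear free-motion force model
alpha = gpr_fit(depth, force_free)

def near_object(measured_force, d, margin=0.5):
    """Flag proximity when force exceeds the free-motion prediction."""
    expected = gpr_predict(depth, alpha, np.array([d]))[0]
    return measured_force - expected > margin
```

The physical basis is the failure wedge zone: a buried object ahead of the probe stiffens the wedge of displaced granules, raising the force above the free-motion baseline before contact occurs.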
State Estimation and Environment Recognition for Articulated Structures via Proximity Sensors Distributed over the Whole Body
Iwao, Kengo, Arita, Hikaru, Tahara, Kenji
For robots with low rigidity, determining the robot's state based solely on kinematics is challenging. This is particularly crucial for a robot whose entire body is in contact with the environment, as accurate state estimation is essential for environmental interaction. We propose a method for simultaneous articulated robot posture estimation and environmental mapping by integrating data from proximity sensors distributed over the whole body. Our method extends the discrete-time model, typically used for state estimation, to the spatial direction of the articulated structure. The simulations demonstrate that this approach significantly reduces estimation errors.
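A toy analogue of extending the discrete-time model to the spatial direction: run a one-dimensional Kalman filter along the link index instead of along time, fusing each link's proximity-derived measurement with a smooth-bending prior from the previous link. The state, noise values, and measurement model are invented for illustration.

```python
import numpy as np

def spatial_kalman(z, q=0.01, r=0.04):
    """1-D Kalman filter stepped along the link index, not time.

    `z` holds one noisy proximity-derived deflection measurement per
    link; the process model 'adjacent links bend similarly' plays the
    role that the dynamics model plays in ordinary state estimation."""
    x, p = z[0], r
    estimates = [x]
    for meas in z[1:]:
        p = p + q                      # predict: the next link bends slightly
        k = p / (p + r)                # Kalman gain
        x = x + k * (meas - x)         # update with this link's sensor
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Five links whose true deflection is 1.0, observed with sensor noise.
z = np.array([1.2, 0.9, 1.1, 0.8, 1.0])
est = spatial_kalman(z)
```

Filtering across the structure lets each link's estimate borrow information from its neighbours, which is what makes the approach robust for low-rigidity robots where kinematics alone is unreliable.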